Artificial intelligence is fast becoming ubiquitous. You can see it used in almost everything, from self-driving cars to your Google Assistant, and even in predicting diseases with astounding accuracy, in less time than it would take you to make your own coffee.
But AI, like any potentially useful piece of technology, needs to be regulated. The United States government knows this, which is why AI regulation is already underway. So what kind of guidelines are in place, or in the works, to ensure that we don't end up like the world of the "Terminator" movies?
Early Steps
According to ScreenRant, the US government took its first step toward regulating AI back in 2020 with the National Artificial Intelligence Initiative Act, which was passed into law a year later.
Among its main goals is to set up and assign tasks to the committees that will do the bulk of the AI regulation. Aside from that, it also defines artificial intelligence as follows:
"A machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments."
One of the bodies handling the job of actually regulating AI is the National Artificial Intelligence Research Resource Task Force, or NAIRRTF. Its main purpose is to study how AI affects US citizens as a whole, including the technology's application in privacy-sensitive areas like facial recognition software.
As per a statement by the NAIRRTF, its job is to "manage issues related to civil rights, civil liberties, and privacy" as they relate to the application of AI as a whole.
Cooperating With The World
As of this year, the United States government and the EU are "starting to align" on AI regulation, as per an article by the American public policy nonprofit Brookings. Among the biggest goals of this cooperation is to implement "meaningful" oversight of AI while still enabling its development.
By cooperating with each other, the two governments look to create a "unified, international approach" to governing the use and production of AI. Shared oversight becomes far more effective when every participant promotes its best practices in pursuit of a common goal.
It is also likely that this cooperation is due in part to both sides recognizing the immense benefits artificial intelligence can offer the world. Take the fight against the global climate crisis, for example. Scientists at the University of Waterloo are training a deep-learning model to identify climate change "tipping points," which could help warn us when the world's climate crisis is approaching the point of no return.
AI Is Not 'Dangerous' In The Way You Know
Now, to some people, the need to regulate AI could imply that it poses a risk (perhaps an existential one) to humanity. But take solace in the truth that current-generation AI is far, far below your fears of "the ghost in the machine."
ScienceAlert argues that AI taking over the world and driving humanity to extinction is impossible for one simple reason: we lack the capability to create an AI smarter than we are. The artificial intelligence you see and use in daily life is extremely limited in its capabilities. The self-aware, super-intelligent "true" AI you fear is still decades, perhaps even centuries, away from being made.
Regulating current-gen artificial intelligence is more about ensuring that people won't use it to spy on you. That's it. So sleep soundly tonight knowing that your Amazon Alexa is NOT going to plot to take over the world, even with no law stopping it from doing so.
This article is owned by Tech Times
Written by RJ Pierce